jackd1 has mostly been superseded by jackd2 and, as far as I understand,
can be ignored
pipewire-jack integrates well with pipewire and the rest of the Linux audio
world
jackd2 is the native JACK server. When started, it takes control of the sound
card directly and will steal it from pipewire. Non-JACK audio applications will
likely cease to see the sound card until JACK is stopped and wireplumber is
restarted. Pipewire should be able to keep working as a JACK client,
but I haven't gone down that route yet
pipewire-jack mostly works. At some point I experienced glitches in complex
JACK apps like giada or ardour that went away after switching to jackd2.
I have not investigated the glitches further
So: try things with pw-jack. If you see odd glitches, try without pw-jack to
use the native jackd2. Keep in mind that if you do so, you will lose
standard pipewire until you stop jackd2 and restart wireplumber.
I recently mentioned on the internet that I did work in this direction, and a friend of mine asked me to write a blog post about it. I didn't blog for a long time (keeping all the goodness for myself, hehe), so here we go.

To set the scene, let's assume we want to make an executable binary for x86_64 Linux that's supposed to be extremely portable. It should work on both Debian and Arch Linux. It should work on systems without glibc, like Alpine Linux. It should even work in a FROM scratch Docker container. In a more serious setting you would statically link musl-libc with your Rust program, but today we're in a silly-goofy mood, so we're going to try to make this work without a libc. And we're also going to use Rust for this, more specifically the stable release channel of Rust, so this blog post won't use any nightly-only features that might still change/break. If you're using a Rust version that was recent at the time of writing or later (>= 1.68.0 according to my computer), you should be able to try this at home just fine.
This tutorial assumes some prior programming experience, and it's going to involve some x86_64 assembly. If you already know what a syscall is, you'll be just fine. If this is your first exposure to programming you might still be able to follow along, but it might be a wild ride.
If you haven't already, install rustup (possibly also available in your package manager, who knows?)
# when asked, press enter to confirm default settings
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
This is going to install everything you need to use Rust on Linux (this tutorial assumes you're following along on Linux, btw). Usually it's still using a system linker (by calling the cc binary, and it errors out if none is present), but instead we're going to use rustup to install an additional target:
rustup target add x86_64-unknown-none
I don't know if/how this is made available by Linux distributions, so I recommend following along with Rust installed from rustup.
Anyway, we're creating a new project with cargo; this creates a new directory that we can then change into (you might've done this before):
cargo new hack-the-planet
cd hack-the-planet
There's going to be a file named Cargo.toml; we don't need to make any changes there, but the one that was auto-generated for me at the time of writing looks like this:
[package]
name = "hack-the-planet"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
There's a second file named src/main.rs; it's going to contain some pre-generated hello world, but we're going to delete it and create a new, empty file:
rm src/main.rs
touch src/main.rs
Alrighty, leaving this file empty is not valid, but we're going to walk through the individual steps, so we're going to try to build with an empty file first. At this point I would like to credit this chapter of a fasterthanli.me series and a blog post by Philipp Oppermann; this tutorial is merely a 2023 update and makes it work with stable Rust. Let's run the build:
$ cargo build --release --target x86_64-unknown-none
Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error[E0463]: can't find crate for `std`
= note: the `x86_64-unknown-none` target may not support the standard library
= note: `std` is required by `hack_the_planet` because it does not declare `#![no_std]`
error[E0601]: `main` function not found in crate `hack_the_planet`
= note: consider adding a `main` function to `src/main.rs`
Some errors have detailed explanations: E0463, E0601.
For more information about an error, try `rustc --explain E0463`.
error: could not compile `hack-the-planet` due to 2 previous errors
Since this doesn't use a libc (oh right, I forgot to mention this up to this point, actually), this also means there's no std standard library. Usually the standard library of Rust still uses the system libc to do syscalls, but since we specified our libc as none, this means std won't be available (use std::fs::rename won't work). There are still other functions we can use and import; for example there's core, which is effectively a second standard library, but much smaller.
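As a quick illustration (not from the original post): std re-exports most of core, so you can call core APIs from an ordinary hosted program, and such code would keep working unchanged under #![no_std]. A small sketch with a hypothetical helper:

```rust
// core works without any operating system support; this function only
// uses core APIs, so it would compile under #![no_std] as well.
fn shortest_len(a: &str, b: &str) -> usize {
    core::cmp::min(a.len(), b.len())
}

fn main() {
    // "no_std" has 6 bytes, "core" has 4, so the minimum is 4
    assert_eq!(shortest_len("no_std", "core"), 4);
}
```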
To opt out of the std standard library, we can put #![no_std] into src/main.rs:
Rust noticed we didn't define a main function and suggests we add one. This isn't what we want though, so we'll politely decline and inform Rust we don't have a main and it shouldn't attempt to call it. We're adding #![no_main] to our file, and src/main.rs now looks like this:
#![no_std]
#![no_main]
Running the build again:
$ cargo build
Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error: `#[panic_handler]` function required, but not found
error: language item required, but not found: `eh_personality`
= note: this can occur when a binary crate with `#![no_std]` is compiled for a target where `eh_personality` is defined in the standard library
= help: you may be able to compile for a target that doesn't need `eh_personality`, specify a target with `--target` or in `.cargo/config`
error: could not compile `hack-the-planet` due to 2 previous errors
Rust is asking us for a panic handler, basically "I'm going to jump to this address if something goes terribly wrong and execute whatever you put there". Eventually we would put some code there to just exit the program, but for now an infinite loop will do. This is likely going to get stripped away by the compiler anyway if it notices our program has no code branches leading to a panic and the code is unused. Our src/main.rs now looks like this:
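The listing itself is missing from this copy of the post; based on the description, src/main.rs at this stage presumably looked something like this (a sketch, exact formatting may have differed):

```rust
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Jump target for when something goes terribly wrong; for now we
// simply loop forever instead of exiting cleanly.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```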
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet
target/x86_64-unknown-none/release/hack-the-planet: file format elf64-x86-64
Ok, that looks pretty "from scratch" to me. The file contains no cpu instructions. Also note how our infinite loop is not present (as predicted).
Making a basic program and executing it
Ok, let's try to make a valid program that basically just cleanly exits. First let's try to add some cpu instructions and verify they're indeed getting executed. Lemme introduce the INT3 instruction in x86_64 assembly. In binary it's also known as the 0xCC opcode. It crashes our program in a slightly different way, so if the error message changes, we know it worked. The other tutorials use a #[naked] function for the entry point, but since this feature isn't stabilized at the time of writing, we're going to use the global_asm! macro instead. Also don't worry, I'm not going to introduce every assembly instruction individually. Our program now looks like this:
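Again the listing is absent from this copy; judging by the description, the program at this point looked roughly like this (a sketch):

```rust
#![no_std]
#![no_main]

use core::arch::global_asm;
use core::panic::PanicInfo;

// Define the `_start` entry point directly in assembly: a single
// breakpoint instruction that crashes the program with SIGTRAP.
global_asm!(
    ".global _start",
    "_start:",
    "int3",
);

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```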
The error message of the crash is now slightly different because it's hitting our breakpoint cpu instruction. Fun fact, btw: if you run this in strace you can see it isn't making any system calls (aka not talking to the kernel at all, it just crashes):
Let's try to make a program that does a clean shutdown. To do this, we inform the kernel with a system call that we'd like to exit. We can get more info on this with man 2 exit, which defines exit like this:
[[noreturn]] void _exit(int status);
On Linux this syscall is actually called _exit, and exit is implemented as a libc function, but we don't care about any of that today; it's going to do the job just fine. Also note how it takes a single argument of type int. In C-speak this means "signed 32 bit", i32 in Rust.
Next we need to figure out the syscall number of this syscall. These numbers are cpu-architecture specific for some reason (idk, idc). We're looking these numbers up with ripgrep in /usr/include/asm/:
Since we're on x86_64, the correct value is the one in unistd_64.h: 60. Also, on x86_64 the syscall number goes into the rax cpu register, and the status argument goes into the rdi register. The return value of the syscall is going to be placed in the rax register after the syscall is done, but for exit, execution is never handed back to us. Let's try to write 60 into the rax register and 69 into the rdi register. To copy into registers we're going to use the mov destination, source instruction, which copies from source to destination. With these registers set up we can use the syscall cpu instruction to hand execution over to the kernel. Don't worry, there's only one more assembly instruction coming, and for everything else we're going to use Rust.
Our code now looks like this:
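The listing didn't survive in this copy; following the registers described above, the entry point presumably became something like this (a sketch):

```rust
#![no_std]
#![no_main]

use core::arch::global_asm;
use core::panic::PanicInfo;

// exit(69): syscall number 60 goes into rax, the status into rdi,
// then `syscall` hands execution over to the kernel.
global_asm!(
    ".global _start",
    "_start:",
    "mov rax, 60",
    "mov rdi, 69",
    "syscall",
);

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```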
Writing Rust
Ok, but even though cpu instructions can be fun at times, I'd rather not deal with them most of the time (this might strike you as odd, considering this blog post). Instead, let's try to define a function in Rust and call into that instead. We're going to define this function as unsafe (btw, none of this is taking advantage of the safety guarantees of Rust, in case it wasn't obvious. This tutorial is mostly going to stick to unsafe Rust, but for bigger projects you can attempt to reduce your usage of unsafe to opt back into normal, safe Rust). It also declares the function with #[no_mangle] so the function name is preserved as main and we can call it from our global_asm entry point. Lastly, when our program is started it's going to get the stack address passed in one of the cpu registers; this value is expected to be passed to our function as an argument. Our function declares ! as its return type, which means it never returns:
#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    // TODO: this is missing
}
This won't compile yet; we need to add our assembly for the exit syscall back in.
This time we're using the asm! macro, which is a slightly more declarative approach. We want to run the syscall cpu instruction with 60 in the rax register, and this time we want the rdi register to be zero, to indicate a successful exit. We also use options(noreturn) so Rust knows it should assume execution does not resume after this assembly is executed (the Linux kernel guarantees this). We modify our global_asm! entrypoint to call our new main function, and to copy the stack address from rsp into the register for the first argument, rdi, because it would otherwise get lost forever:
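Putting the described pieces together (the listing is missing here), the program at this stage plausibly read as follows; a sketch that matches the disassembly shown below:

```rust
#![no_std]
#![no_main]

use core::arch::{asm, global_asm};
use core::panic::PanicInfo;

// Copy the stack address from rsp into rdi (the first function
// argument), then call into the Rust main function.
global_asm!(
    ".global _start",
    "_start:",
    "mov rdi, rsp",
    "call main",
);

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    // exit(0): syscall 60, status 0
    asm!(
        "syscall",
        in("rax") 60,
        in("rdi") 0,
        options(noreturn)
    );
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```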
After building and disassembling this, the Rust compiler is slowly starting to do work for us:
$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet
target/x86_64-unknown-none/release/hack-the-planet: file format elf64-x86-64
Disassembly of section .text:
0000000000001210 <_start>:
1210: 48 89 e7 mov %rsp,%rdi
1213: e8 08 00 00 00 call 1220 <main>
1218: cc int3
1219: cc int3
121a: cc int3
121b: cc int3
121c: cc int3
121d: cc int3
121e: cc int3
121f: cc int3
0000000000001220 <main>:
1220: 50 push %rax
1221: b8 3c 00 00 00 mov $0x3c,%eax
1226: 31 ff xor %edi,%edi
1228: 0f 05 syscall
122a: 0f 0b ud2
The mov and syscall instructions are still the same, but it noticed it can XOR the rdi register with itself to set it to zero. It's using x86 assembly language (the 32-bit variant of x86_64 that also happens to work on x86_64) to do so; that's why the register is referred to as edi in the disassembly. You can also see it's inserting a bunch of 0xCC instructions (for alignment), and Rust puts the opcodes 0x0F 0x0B at the end of the function to force an invalid-opcode exception, so the program is guaranteed to crash in case the exit syscall doesn't do it.
This code still executes as expected:
Adding functions
Ok, we're getting closer, but we aren't quite there yet. Let's try to write an exit function for our assembly that we can then call like a normal function. Remember that it takes a signed 32-bit integer that's supposed to go into rdi.
Actually, since this function doesn't take any raw pointers and any i32 is valid for this syscall, we're going to remove the unsafe marker from this function. When doing this we still need to use unsafe within the function for our inline assembly.
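The listing isn't included in this copy; per the description, the exit function presumably looked like this (a sketch; it assumes `use core::arch::asm;` at the top of the file):

```rust
// Safe wrapper: any i32 status is valid for exit, and no raw
// pointers are involved, so the function itself isn't unsafe.
fn exit(status: i32) -> ! {
    unsafe {
        asm!(
            "syscall",
            in("rax") 60,
            in("rdi") status,
            options(noreturn)
        );
    }
}
```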
Running this still works, but interestingly, the generated assembly didn't change at all:
$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet
target/x86_64-unknown-none/release/hack-the-planet: file format elf64-x86-64
Disassembly of section .text:
0000000000001210 <_start>:
1210: 48 89 e7 mov %rsp,%rdi
1213: e8 08 00 00 00 call 1220 <main>
1218: cc int3
1219: cc int3
121a: cc int3
121b: cc int3
121c: cc int3
121d: cc int3
121e: cc int3
121f: cc int3
0000000000001220 <main>:
1220: 50 push %rax
1221: b8 3c 00 00 00 mov $0x3c,%eax
1226: 31 ff xor %edi,%edi
1228: 0f 05 syscall
122a: 0f 0b ud2
Rust noticed there's no need to make it a separate function at runtime and instead merged the instructions of the exit function directly into our main. It also noticed the 0 argument in exit(0) means rdi is supposed to be zero and uses the XOR optimization mentioned before.
Since main is not calling any unsafe functions anymore, we could mark it as safe too, but in the next few functions we're going to deal with file descriptors and raw pointers, so this is likely the only safe function we're going to write in this tutorial; let's just keep the unsafe marker.
Printing text
Ok, let's try to do a quick hello world. To do this we're going to call the write syscall. Looking it up with man 2 write:
The write syscall takes 3 arguments and returns a signed size_t; in Rust this is called isize. In C, size_t is an unsigned integer type that can hold any value of sizeof(...) for the given platform; ssize_t can only store half of that because it uses one of the bits to indicate that an error has occurred (the first s means signed; write returns -1 in case of an error).
The arguments for write are:
the file descriptor to write to. stdout is located on file descriptor 1.
a pointer/address to some memory.
the number of bytes that should be written, starting at the given address.
Now that's a lot of stuff at once. Since this syscall is actually going to hand execution back to our program, we need to let Rust know which cpu registers the syscall is writing to, so Rust doesn't attempt to use them to store data (which would be silently overwritten by the syscall). inlateout("rax") 1 => r0 means we're writing a value to the register and want the result back in the variable r0. in("rdi") fd means we want to write the value of fd into the rdi register. lateout("rcx") _ means the Linux kernel may write to that register (so the previous value may get lost), but we don't want to store the value anywhere (the underscore acts as a dummy variable name).
This doesn't compile just yet, though:
Rust has inferred that the type of r0 is isize, since that's what our function returns, but the type of the input value for the register was inferred to be i32. We're going to select a specific number type to fix this.
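As a side note (not part of the original post): with explicit number types, the same operand syntax can be tried out in an ordinary hosted Rust program on x86_64 Linux, where std is available. A sketch, with an extra r11 clobber since the kernel overwrites that register too:

```rust
use std::arch::asm;

// Raw Linux write(2): syscall number 1 on x86_64. Returns the number
// of bytes written, or a negative errno value on failure.
fn sys_write(fd: i32, buf: *const u8, count: usize) -> isize {
    let r0: isize;
    unsafe {
        asm!(
            "syscall",
            inlateout("rax") 1isize => r0, // syscall number in, return value out
            in("rdi") fd,
            in("rsi") buf,
            in("rdx") count,
            lateout("rcx") _, // clobbered by the syscall instruction
            lateout("r11") _, // likewise clobbered on x86_64
        );
    }
    r0
}

fn main() {
    let msg = b"Hello world\n";
    let n = sys_write(1, msg.as_ptr(), msg.len());
    assert_eq!(n, msg.len() as isize);
}
```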
We need to set the number of bytes we want to write explicitly because there's no concept of null-byte termination in the write system call; it's quite literally "write the next X bytes, starting from this address". Our program now looks like this:
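This copy of the post is missing the final listing; assembling everything described so far, it plausibly looked like this (a sketch consistent with the disassembly and strace output that follow):

```rust
#![no_std]
#![no_main]

use core::arch::{asm, global_asm};
use core::panic::PanicInfo;

// Entry point: pass the stack address to main and call it.
global_asm!(
    ".global _start",
    "_start:",
    "mov rdi, rsp",
    "call main",
);

fn exit(status: i32) -> ! {
    unsafe {
        asm!("syscall", in("rax") 60, in("rdi") status, options(noreturn));
    }
}

fn write(fd: i32, buf: *const u8, count: usize) -> isize {
    let r0: isize;
    unsafe {
        asm!(
            "syscall",
            inlateout("rax") 1isize => r0,
            in("rdi") fd,
            in("rsi") buf,
            in("rdx") count,
            lateout("rcx") _,
            lateout("r11") _,
        );
    }
    r0
}

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    write(1, b"Hello world\n".as_ptr(), 12);
    exit(0);
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```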
This time there are 2 syscalls: first write, then exit. For write it's setting up the 3 arguments in our cpu registers (rdi, rsi, rdx). The lea instruction subtracts 0x102b from the rip register (the instruction pointer) and places the result in the rsi register. This is effectively saying "an address relative to wherever this code was loaded into memory". The instruction pointer is going to point directly behind the opcodes of the lea instruction, so 0x1238 - 0x102b = 0x20d. This address is also pointed out in the disassembly as a comment.
We don't see the string in our disassembly, but we can convert our 0x20d hex to 525 in decimal and use dd to read 12 bytes from that offset, and sure enough:
$ dd bs=1 skip=525 count=12 if=target/x86_64-unknown-none/release/hack-the-planet
Hello world
12+0 records in
12+0 records out
Executing our binary with strace also shows the new write syscall (and the bytes that are being written, mixed into the output).
$ strace -f ./hack-the-planet
execve("./hack-the-planet", ["./hack-the-planet"], 0x74493abe64a8 /* 39 vars */) = 0
write(1, "Hello world\n", 12Hello world
) = 12
exit(0) = ?
+++ exited with 0 +++
After running strip on it to remove some symbols, the binary is so small that if you open it in a text editor it fits in a screenshot:
I've used hardware-backed OpenPGP keys since 2006, when I imported newly generated rsa1024 subkeys to a FSFE Fellowship card. This worked well for several years, and I recall buying more ZeitControl cards for multi-machine usage and backup purposes. As a side note, I recall being unsatisfied with the weak 1024-bit RSA subkeys at the time (my primary key was a somewhat stronger 1280-bit RSA key created back in 2002), but OpenPGP cards at the time didn't support more than 1024-bit RSA, and were (and still often are) also limited to power-of-two RSA key sizes, which I dislike.
I had my master key on disk with a strong password for a while, mostly to refresh the expiration time of the subkeys and to sign others' OpenPGP keys. At some point I stopped carrying around encrypted copies of my master key. That was my main setup when I migrated to a new, stronger RSA 3744-bit key with rsa2048 subkeys on a YubiKey NEO back in 2014. At that point, signing others' OpenPGP keys was a rare enough occurrence that I settled with bringing out my offline machine to perform this operation, transferring the public keys to sign on USB sticks. In 2019 I re-evaluated my OpenPGP setup and ended up creating an offline Ed25519 key with subkeys on a FST-01G running Gnuk. My approach to signing others' OpenPGP keys was still to bring out my offline machine and sign things using the master secret, using USB sticks for storage and transport. Which meant I almost never did that, because it took too much effort. So my 2019-era Ed25519 key still only has a handful of signatures on it, since I had essentially stopped signing others' keys, which is the traditional way of getting signatures in return.
None of this caused any critical problem for me, because I continued to use my old 2014-era RSA3744 key in parallel with my new 2019-era Ed25519 key, since too many systems didn't handle Ed25519. However, during 2022 this changed, and the only remaining environment where I still used my RSA3744 key was Debian, and they require OpenPGP signatures on a new key to allow it to replace an older key. I was in denial about this sub-optimal solution during 2022 and endured its practical consequences, having to use the YubiKey NEO (which I had replaced with a permanently inserted YubiKey Nano at some point) for Debian-related purposes alone.
In December 2022 I bought a new laptop and set up a FST-01SZ with my Ed25519 key, and while I have taken a vacation from Debian, I continue to extend the expiration period on the old RSA3744 key in case I will ever have to use it again, so the overall OpenPGP setup was still sub-optimal. Having two valid OpenPGP keys at the same time causes people to use both for email encryption (leading me to have to use both devices), and the WKD Key Discovery protocol doesn't like two valid keys either. At FOSDEM '23 I ran into Andre Heinecke at GnuPG, and I couldn't help complaining about how complex and unsatisfying all OpenPGP-related matters were; he mildly ignored my rant and asked why I didn't put the master key on another smartcard. The comment sunk in when I came home, and recently I connected all the dots; this post is a summary of what I did to move my offline OpenPGP master key to a Nitrokey Start.
First a word about device choice: I still prefer to use hardware devices that are as compatible with free software as possible, but the FST-01G and FST-01SZ are no longer easily available for purchase. I got a comment about the Nitrokey Start in my last post, and had two of them available to experiment with. There are things to dislike about the Nitrokey Start compared to the YubiKey (e.g., the relatively insecure chip architecture, the bulkier form factor and the lack of FIDO/U2F/OATH support), but as far as I know there is no more widely available owner-controlled device that is manufactured with the intended purpose of implementing an OpenPGP card. Thus it hits the sweet spot for me.
The first step is to run the latest firmware on the Nitrokey Start, for bug fixes and the important OpenSSH 9.0 compatibility, and there is reproducibly-built firmware published that you can install using pynitrokey. I run Trisquel 11 aramo on my laptop, which does not include the Python Pip package (likely because it promotes installing non-free software), so that was a slight complication. Building the firmware locally may have worked, and I would like to do that eventually to confirm the published firmware; however, to save time I settled with installing the Ubuntu 22.04 packages on my machine:
$ sha256sum python3-pip*
ded6b3867a4a4cbaff0940cab366975d6aeecc76b9f2d2efa3deceb062668b1c python3-pip_22.0.2+dfsg-1ubuntu0.2_all.deb
e1561575130c41dc3309023a345de337e84b4b04c21c74db57f599e267114325 python3-pip-whl_22.0.2+dfsg-1ubuntu0.2_all.deb
$ doas dpkg -i python3-pip*
...
$ doas apt install -f
...
$
Installing pynitrokey downloaded a bunch of dependencies, and it would be nice to audit the license and security vulnerabilities of each of them. (Verbose output below slightly redacted.)
jas@kaka:~$ pip3 install --user pynitrokey
Collecting pynitrokey
Downloading pynitrokey-0.4.34-py3-none-any.whl (572 kB)
Collecting frozendict~=2.3.4
Downloading frozendict-2.3.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (113 kB)
Requirement already satisfied: click<9,>=8.0.0 in /usr/lib/python3/dist-packages (from pynitrokey) (8.0.3)
Collecting ecdsa
Downloading ecdsa-0.18.0-py2.py3-none-any.whl (142 kB)
Collecting python-dateutil~=2.7.0
Downloading python_dateutil-2.7.5-py2.py3-none-any.whl (225 kB)
Collecting fido2<2,>=1.1.0
Downloading fido2-1.1.0-py3-none-any.whl (201 kB)
Collecting tlv8
Downloading tlv8-0.10.0.tar.gz (16 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: certifi>=14.5.14 in /usr/lib/python3/dist-packages (from pynitrokey) (2020.6.20)
Requirement already satisfied: pyusb in /usr/lib/python3/dist-packages (from pynitrokey) (1.2.1.post1)
Collecting urllib3~=1.26.7
Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting spsdk<1.8.0,>=1.7.0
Downloading spsdk-1.7.1-py3-none-any.whl (684 kB)
Collecting typing_extensions~=4.3.0
Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Requirement already satisfied: cryptography<37,>=3.4.4 in /usr/lib/python3/dist-packages (from pynitrokey) (3.4.8)
Collecting intelhex
Downloading intelhex-2.3.0-py2.py3-none-any.whl (50 kB)
Collecting nkdfu
Downloading nkdfu-0.2-py3-none-any.whl (16 kB)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from pynitrokey) (2.25.1)
Collecting tqdm
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting nrfutil<7,>=6.1.4
Downloading nrfutil-6.1.7.tar.gz (845 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: cffi in /usr/lib/python3/dist-packages (from pynitrokey) (1.15.0)
Collecting crcmod
Downloading crcmod-1.7.tar.gz (89 kB)
Preparing metadata (setup.py) ... done
Collecting libusb1==1.9.3
Downloading libusb1-1.9.3-py3-none-any.whl (60 kB)
Collecting pc_ble_driver_py>=0.16.4
Downloading pc_ble_driver_py-0.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
Collecting piccata
Downloading piccata-2.0.3-py3-none-any.whl (21 kB)
Collecting protobuf<4.0.0,>=3.17.3
Downloading protobuf-3.20.3-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
Collecting pyserial
Downloading pyserial-3.5-py2.py3-none-any.whl (90 kB)
Collecting pyspinel>=1.0.0a3
Downloading pyspinel-1.0.3.tar.gz (58 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from nrfutil<7,>=6.1.4->pynitrokey) (5.4.1)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil~=2.7.0->pynitrokey) (1.16.0)
Collecting pylink-square<0.11.9,>=0.8.2
Downloading pylink_square-0.11.1-py2.py3-none-any.whl (78 kB)
Collecting jinja2<3.1,>=2.11
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting bincopy<17.11,>=17.10.2
Downloading bincopy-17.10.3-py3-none-any.whl (17 kB)
Collecting fastjsonschema>=2.15.1
Downloading fastjsonschema-2.16.3-py3-none-any.whl (23 kB)
Collecting astunparse<2,>=1.6
Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting oscrypto~=1.2
Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB)
Collecting deepmerge==0.3.0
Downloading deepmerge-0.3.0-py2.py3-none-any.whl (7.6 kB)
Collecting pyocd<=0.31.0,>=0.28.3
Downloading pyocd-0.31.0-py3-none-any.whl (12.5 MB)
Collecting click-option-group<0.6,>=0.3.0
Downloading click_option_group-0.5.5-py3-none-any.whl (12 kB)
Collecting pycryptodome<4,>=3.9.3
Downloading pycryptodome-3.17-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Collecting pyocd-pemicro<1.2.0,>=1.1.1
Downloading pyocd_pemicro-1.1.5-py3-none-any.whl (9.0 kB)
Requirement already satisfied: colorama<1,>=0.4.4 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (0.4.4)
Collecting commentjson<1,>=0.9
Downloading commentjson-0.9.0.tar.gz (8.7 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: asn1crypto<2,>=1.2 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.0)
Collecting pypemicro<0.2.0,>=0.1.9
Downloading pypemicro-0.1.11-py3-none-any.whl (5.7 MB)
Collecting libusbsio>=2.1.11
Downloading libusbsio-2.1.11-py3-none-any.whl (247 kB)
Collecting sly==0.4
Downloading sly-0.4.tar.gz (60 kB)
Preparing metadata (setup.py) ... done
Collecting ruamel.yaml<0.18.0,>=0.17
Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting cmsis-pack-manager<0.3.0
Downloading cmsis_pack_manager-0.2.10-py2.py3-none-manylinux1_x86_64.whl (25.1 MB)
Collecting click-command-tree==1.1.0
Downloading click_command_tree-1.1.0-py3-none-any.whl (3.6 kB)
Requirement already satisfied: bitstring<3.2,>=3.1 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (3.1.7)
Collecting hexdump~=3.3
Downloading hexdump-3.3.zip (12 kB)
Preparing metadata (setup.py) ... done
Collecting fire
Downloading fire-0.5.0.tar.gz (88 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse<2,>=1.6->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.37.1)
Collecting humanfriendly
Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Collecting argparse-addons>=0.4.0
Downloading argparse_addons-0.12.0-py3-none-any.whl (3.3 kB)
Collecting pyelftools
Downloading pyelftools-0.29-py2.py3-none-any.whl (174 kB)
Collecting milksnake>=0.1.2
Downloading milksnake-0.1.5-py2.py3-none-any.whl (9.6 kB)
Requirement already satisfied: appdirs>=1.4 in /usr/lib/python3/dist-packages (from cmsis-pack-manager<0.3.0->spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.4)
Collecting lark-parser<0.8.0,>=0.7.1
Downloading lark-parser-0.7.8.tar.gz (276 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2<3.1,>=2.11->spsdk<1.8.0,>=1.7.0->pynitrokey) (2.0.1)
Collecting asn1crypto<2,>=1.2
Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)
Collecting wrapt
Downloading wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78 kB)
Collecting future
Downloading future-0.18.3.tar.gz (840 kB)
Preparing metadata (setup.py) ... done
Collecting psutil>=5.2.2
Downloading psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (280 kB)
Collecting capstone<5.0,>=4.0
Downloading capstone-4.0.2-py2.py3-none-manylinux1_x86_64.whl (2.1 MB)
Collecting naturalsort<2.0,>=1.5
Downloading naturalsort-1.5.1.tar.gz (7.4 kB)
Preparing metadata (setup.py) ... done
Collecting prettytable<3.0,>=2.0
Downloading prettytable-2.5.0-py3-none-any.whl (24 kB)
Collecting intervaltree<4.0,>=3.0.2
Downloading intervaltree-3.1.0.tar.gz (32 kB)
Preparing metadata (setup.py) ... done
Collecting ruamel.yaml.clib>=0.2.6
Downloading ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (485 kB)
Collecting termcolor
Downloading termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting sortedcontainers<3.0,>=2.0
Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: wcwidth in /usr/lib/python3/dist-packages (from prettytable<3.0,>=2.0->pyocd<=0.31.0,>=0.28.3->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.2.5)
Building wheels for collected packages: nrfutil, crcmod, sly, tlv8, commentjson, hexdump, pyspinel, fire, intervaltree, lark-parser, naturalsort, future
Building wheel for nrfutil (setup.py) ... done
Created wheel for nrfutil: filename=nrfutil-6.1.7-py3-none-any.whl size=898520 sha256=de6f8803f51d6c26d24dc7df6292064a468ff3f389d73370433fde5582b84a10
Stored in directory: /home/jas/.cache/pip/wheels/39/2b/9b/98ab2dd716da746290e6728bdb557b14c1c9a54cb9ed86e13b
Building wheel for crcmod (setup.py) ... done
Created wheel for crcmod: filename=crcmod-1.7-cp310-cp310-linux_x86_64.whl size=31422 sha256=5149ac56fcbfa0606760eef5220fcedc66be560adf68cf38c604af3ad0e4a8b0
Stored in directory: /home/jas/.cache/pip/wheels/85/4c/07/72215c529bd59d67e3dac29711d7aba1b692f543c808ba9e86
Building wheel for sly (setup.py) ... done
Created wheel for sly: filename=sly-0.4-py3-none-any.whl size=27352 sha256=f614e413918de45c73d1e9a8dca61ca07dc760d9740553400efc234c891f7fde
Stored in directory: /home/jas/.cache/pip/wheels/a2/23/4a/6a84282a0d2c29f003012dc565b3126e427972e8b8157ea51f
Building wheel for tlv8 (setup.py) ... done
Created wheel for tlv8: filename=tlv8-0.10.0-py3-none-any.whl size=11266 sha256=3ec8b3c45977a3addbc66b7b99e1d81b146607c3a269502b9b5651900a0e2d08
Stored in directory: /home/jas/.cache/pip/wheels/e9/35/86/66a473cc2abb0c7f21ed39c30a3b2219b16bd2cdb4b33cfc2c
Building wheel for commentjson (setup.py) ... done
Created wheel for commentjson: filename=commentjson-0.9.0-py3-none-any.whl size=12092 sha256=28b6413132d6d7798a18cf8c76885dc69f676ea763ffcb08775a3c2c43444f4a
Stored in directory: /home/jas/.cache/pip/wheels/7d/90/23/6358a234ca5b4ec0866d447079b97fedf9883387d1d7d074e5
Building wheel for hexdump (setup.py) ... done
Created wheel for hexdump: filename=hexdump-3.3-py3-none-any.whl size=8913 sha256=79dfadd42edbc9acaeac1987464f2df4053784fff18b96408c1309b74fd09f50
Stored in directory: /home/jas/.cache/pip/wheels/26/28/f7/f47d7ecd9ae44c4457e72c8bb617ef18ab332ee2b2a1047e87
Building wheel for pyspinel (setup.py) ... done
Created wheel for pyspinel: filename=pyspinel-1.0.3-py3-none-any.whl size=65033 sha256=01dc27f81f28b4830a0cf2336dc737ef309a1287fcf33f57a8a4c5bed3b5f0a6
Stored in directory: /home/jas/.cache/pip/wheels/95/ec/4b/6e3e2ee18e7292d26a65659f75d07411a6e69158bb05507590
Building wheel for fire (setup.py) ... done
Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116951 sha256=3d288585478c91a6914629eb739ea789828eb2d0267febc7c5390cb24ba153e8
Stored in directory: /home/jas/.cache/pip/wheels/90/d4/f7/9404e5db0116bd4d43e5666eaa3e70ab53723e1e3ea40c9a95
Building wheel for intervaltree (setup.py) ... done
Created wheel for intervaltree: filename=intervaltree-3.1.0-py2.py3-none-any.whl size=26119 sha256=5ff1def22ba883af25c90d90ef7c6518496fcd47dd2cbc53a57ec04cd60dc21d
Stored in directory: /home/jas/.cache/pip/wheels/fa/80/8c/43488a924a046b733b64de3fac99252674c892a4c3801c0a61
Building wheel for lark-parser (setup.py) ... done
Created wheel for lark-parser: filename=lark_parser-0.7.8-py2.py3-none-any.whl size=62527 sha256=3d2ec1d0f926fc2688d40777f7ef93c9986f874169132b1af590b6afc038f4be
Stored in directory: /home/jas/.cache/pip/wheels/29/30/94/33e8b58318aa05cb1842b365843036e0280af5983abb966b83
Building wheel for naturalsort (setup.py) ... done
Created wheel for naturalsort: filename=naturalsort-1.5.1-py3-none-any.whl size=7526 sha256=bdecac4a49f2416924548cae6c124c85d5333e9e61c563232678ed182969d453
Stored in directory: /home/jas/.cache/pip/wheels/a6/8e/c9/98cfa614fff2979b457fa2d9ad45ec85fa417e7e3e2e43be51
Building wheel for future (setup.py) ... done
Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492037 sha256=57a01e68feca2b5563f5f624141267f399082d2f05f55886f71b5d6e6cf2b02c
Stored in directory: /home/jas/.cache/pip/wheels/5e/a9/47/f118e66afd12240e4662752cc22cefae5d97275623aa8ef57d
Successfully built nrfutil crcmod sly tlv8 commentjson hexdump pyspinel fire intervaltree lark-parser naturalsort future
Installing collected packages: tlv8, sortedcontainers, sly, pyserial, pyelftools, piccata, naturalsort, libusb1, lark-parser, intelhex, hexdump, fastjsonschema, crcmod, asn1crypto, wrapt, urllib3, typing_extensions, tqdm, termcolor, ruamel.yaml.clib, python-dateutil, pyspinel, pypemicro, pycryptodome, psutil, protobuf, prettytable, oscrypto, milksnake, libusbsio, jinja2, intervaltree, humanfriendly, future, frozendict, fido2, ecdsa, deepmerge, commentjson, click-option-group, click-command-tree, capstone, astunparse, argparse-addons, ruamel.yaml, pyocd-pemicro, pylink-square, pc_ble_driver_py, fire, cmsis-pack-manager, bincopy, pyocd, nrfutil, nkdfu, spsdk, pynitrokey
WARNING: The script nitropy is installed in '/home/jas/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed argparse-addons-0.12.0 asn1crypto-1.5.1 astunparse-1.6.3 bincopy-17.10.3 capstone-4.0.2 click-command-tree-1.1.0 click-option-group-0.5.5 cmsis-pack-manager-0.2.10 commentjson-0.9.0 crcmod-1.7 deepmerge-0.3.0 ecdsa-0.18.0 fastjsonschema-2.16.3 fido2-1.1.0 fire-0.5.0 frozendict-2.3.5 future-0.18.3 hexdump-3.3 humanfriendly-10.0 intelhex-2.3.0 intervaltree-3.1.0 jinja2-3.0.3 lark-parser-0.7.8 libusb1-1.9.3 libusbsio-2.1.11 milksnake-0.1.5 naturalsort-1.5.1 nkdfu-0.2 nrfutil-6.1.7 oscrypto-1.3.0 pc_ble_driver_py-0.17.0 piccata-2.0.3 prettytable-2.5.0 protobuf-3.20.3 psutil-5.9.4 pycryptodome-3.17 pyelftools-0.29 pylink-square-0.11.1 pynitrokey-0.4.34 pyocd-0.31.0 pyocd-pemicro-1.1.5 pypemicro-0.1.11 pyserial-3.5 pyspinel-1.0.3 python-dateutil-2.7.5 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.7 sly-0.4 sortedcontainers-2.4.0 spsdk-1.7.1 termcolor-2.2.0 tlv8-0.10.0 tqdm-4.65.0 typing_extensions-4.3.0 urllib3-1.26.15 wrapt-1.15.0
jas@kaka:~$
Then upgrading the device worked remarkably well, although I wish the tool had printed URLs and checksums for the firmware files to allow easy confirmation.
jas@kaka:~$ PATH=$PATH:/home/jas/.local/bin
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.15-5D271572: Nitrokey Nitrokey Start (RTM.12.1-RC2-modified)
jas@kaka:~$ nitropy start update
Command line tool to interact with Nitrokey devices 0.4.34
Nitrokey Start firmware update tool
Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
System: Linux, is_linux: True
Python: 3.10.6
Saving run log to: /tmp/nitropy.log.gc5753a8
Admin PIN:
Firmware data to be used:
- FirmwareType.REGNUAL: 4408, hash: ...b'72a30389' valid (from ...built/RTM.13/regnual.bin)
- FirmwareType.GNUK: 129024, hash: ...b'25a4289b' valid (from ...prebuilt/RTM.13/gnuk.bin)
Currently connected device strings:
Device:
Vendor: Nitrokey
Product: Nitrokey Start
Serial: FSIJ-1.2.15-5D271572
Revision: RTM.12.1-RC2-modified
Config: *:*:8e82
Sys: 3.0
Board: NITROKEY-START-G
initial device strings: [ 'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.15-5D271572', 'Revision': 'RTM.12.1-RC2-modified', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G' ]
Please note:
- Latest firmware available is:
RTM.13 (published: 2022-12-08T10:59:11Z)
- provided firmware: None
- all data will be removed from the device!
- do not interrupt update process - the device may not run properly!
- the process should not take more than 1 minute
Do you want to continue? [yes/no]: yes
...
Starting bootloader upload procedure
Device: Nitrokey Start FSIJ-1.2.15-5D271572
Connected to the device
Running update!
Do NOT remove the device from the USB slot, until further notice
Downloading flash upgrade program...
Executing flash upgrade...
Waiting for device to appear:
Wait 20 seconds.....
Downloading the program
Protecting device
Finish flashing
Resetting device
Update procedure finished. Device could be removed from USB slot.
Currently connected device strings (after upgrade):
Device:
Vendor: Nitrokey
Product: Nitrokey Start
Serial: FSIJ-1.2.19-5D271572
Revision: RTM.13
Config: *:*:8e82
Sys: 3.0
Board: NITROKEY-START-G
device can now be safely removed from the USB slot
final device strings: [ 'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.19-5D271572', 'Revision': 'RTM.13', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G' ]
finishing session 2023-03-16 21:49:07.371291
Log saved to: /tmp/nitropy.log.gc5753a8
jas@kaka:~$
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.19-5D271572: Nitrokey Nitrokey Start (RTM.13)
jas@kaka:~$
Before importing the master key to this device, it should be configured. Note the commands at the beginning that make sure scdaemon/pcscd is not running, because they may have cached state from earlier cards. Change the PIN codes as you like after this; my experience with Gnuk was that the Admin PIN had to be changed first, then you import the key, and then you change the PIN.
jas@kaka:~$ gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
OK
ERR 67125247 Slut på fil <GPG Agent>
jas@kaka:~$ ps auxww | grep -e pcsc -e scd
jas 11651 0.0 0.0 3468 1672 pts/0 R+ 21:54 0:00 grep --color=auto -e pcsc -e scd
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: [not set]
Language prefs ...: [not set]
Salutation .......:
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
gpg/card> admin
Admin commands are allowed
gpg/card> kdf-setup
gpg/card> passwd
gpg: OpenPGP card no. D276000124010200FFFE5D2715720000 detected
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? 3
PIN changed.
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? q
gpg/card> name
Cardholder's surname: Josefsson
Cardholder's given name: Simon
gpg/card> lang
Language preferences: sv
gpg/card> sex
Salutation (M = Mr., F = Ms., or space): m
gpg/card> login
Login data (account name): jas
gpg/card> url
URL to retrieve public key: https://josefsson.org/key-20190320.txt
gpg/card> forcesig
gpg/card> key-attr
Changing card key attribute for: Signature key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
Note: There is no guarantee that the card supports the requested size.
If the key generation does not succeed, please check the
documentation of your card to see what sizes are allowed.
Changing card key attribute for: Encryption key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: cv25519
Changing card key attribute for: Authentication key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
gpg/card>
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: on
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
jas@kaka:~$
Once setup, bring out your offline machine and boot it and mount your USB stick with the offline key. The paths below will be different, and this is using a somewhat unorthodox approach of working with fresh GnuPG configuration paths that I chose for the USB stick.
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ cp -a gnupghome-backup-masterkey gnupghome-import-nitrokey-5D271572
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ gpg --homedir $PWD/gnupghome-import-nitrokey-5D271572 --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
sec ed25519/D73CF638C53C06BE
created: 2019-03-20 expired: 2019-10-22 usage: SC
trust: ultimate validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg> keytocard
Really move the primary key? (y/N) y
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1
sec ed25519/D73CF638C53C06BE
created: 2019-03-20 expired: 2019-10-22 usage: SC
trust: ultimate validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg>
Save changes? (y/N) y
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$
At this point it is useful to confirm that the Nitrokey has the master key available and that is possible to sign statements with it, back on your regular machine:
jas@kaka:~$ gpg --card-status
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 1
KDF setting ......: on
Signature key ....: B1D2 BD13 75BE CB78 4CF4 F8C4 D73C F638 C53C 06BE
created ....: 2019-03-20 23:37:24
Encryption key....: [none]
Authentication key: [none]
General key info..: pub ed25519/D73CF638C53C06BE 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec> ed25519/D73CF638C53C06BE created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 5D271572
ssb> ed25519/80260EE8A9B92B2B created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
ssb> ed25519/51722B08FE4745A2 created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
ssb> cv25519/02923D7EE76EBD60 created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
jas@kaka:~$ echo foo | gpg -a --sign | gpg --verify
gpg: Signature made Thu Mar 16 22:11:02 2023 CET
gpg: using EDDSA key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg: Good signature from "Simon Josefsson <simon@josefsson.org>" [ultimate]
jas@kaka:~$
Finally, let's retrieve and sign a key: for example Andre Heinecke's, whose OpenPGP key identifier I could confirm from his business card:
jas@kaka:~$ gpg --locate-external-keys aheinecke@gnupg.com
gpg: key 1FDF723CF462B6B1: public key "Andre Heinecke <aheinecke@gnupg.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 2 signed: 7 trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1 valid: 7 signed: 64 trust: 7-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2023-05-26
pub rsa3072 2015-12-08 [SC] [expires: 2025-12-05]
94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1
uid [ unknown] Andre Heinecke <aheinecke@gnupg.com>
sub ed25519 2017-02-13 [S]
sub ed25519 2017-02-13 [A]
sub rsa3072 2015-12-08 [E] [expires: 2025-12-05]
sub rsa3072 2015-12-08 [A] [expires: 2025-12-05]
jas@kaka:~$ gpg --edit-key "94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1"
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa3072/1FDF723CF462B6B1
created: 2015-12-08 expires: 2025-12-05 usage: SC
trust: unknown validity: unknown
sub ed25519/2978E9D40CBABA5C
created: 2017-02-13 expires: never usage: S
sub ed25519/DC74D901C8E2DD47
created: 2017-02-13 expires: never usage: A
The following key was revoked on 2017-02-23 by RSA key 1FDF723CF462B6B1 Andre Heinecke <aheinecke@gnupg.com>
sub cv25519/1FFE3151683260AB
created: 2017-02-13 revoked: 2017-02-23 usage: E
sub rsa3072/8CC999BDAA45C71F
created: 2015-12-08 expires: 2025-12-05 usage: E
sub rsa3072/6304A4B539CE444A
created: 2015-12-08 expires: 2025-12-05 usage: A
[ unknown] (1). Andre Heinecke <aheinecke@gnupg.com>
gpg> sign
pub rsa3072/1FDF723CF462B6B1
created: 2015-12-08 expires: 2025-12-05 usage: SC
trust: unknown validity: unknown
Primary key fingerprint: 94A5 C9A0 3C2F E5CA 3B09 5D8E 1FDF 723C F462 B6B1
Andre Heinecke <aheinecke@gnupg.com>
This key is due to expire on 2025-12-05.
Are you sure that you want to sign this key with your
key "Simon Josefsson <simon@josefsson.org>" (D73CF638C53C06BE)
Really sign? (y/N) y
gpg> quit
Save changes? (y/N) y
jas@kaka:~$
This is on my day-to-day machine, using the Nitrokey Start with the offline key. No need to boot the old offline machine just to sign keys or extend expiry anymore! At FOSDEM '23 I managed to get at least one DD signature on my new key, and the Debian keyring maintainers accepted my Ed25519 key. Hopefully I can now finally let my 2014-era RSA3744 key expire on 2023-09-19 and not extend it any further. This should finish my transition to a simpler OpenPGP key setup, yay!
The computer world has a tendency of reinventing the wheel once in a
while. I am not a fan of that process, but sometimes I just have to
bite the bullet and adapt to change. This post explains how I adapted
to one particular change: the netstat to sockstat transition.
I used to do this to show which processes were listening on which
port on a server:
netstat -anpe
It was a handy mnemonic as, in France, ANPE was the agency
responsible for the unemployed (basically). That would list all
sockets (-a), not resolve hostnames (-n, because it's slow), show
processes attached to the socket (-p) with extra info like the user
(-e). This still works, but sometimes fails to find the actual
process hooked to the port. Plus, it lists a whole bunch of UNIX
sockets and non-listening sockets, which are generally irrelevant
for such an audit.
What I really wanted to use was really something like:
netstat -pleunt | sort
... which has the "pleut" mnemonic ("rains", but plural, which makes
no sense and would be badly spelled anyway). That also only lists
listening (-l) and network sockets, specifically UDP (-u) and TCP
(-t).
But enough with the legacy, let's try the brave new world of sockstat,
which has the unfortunate acronym ss.
The equivalent ss command to the above is:
ss -pleuntO
It's similar to the above, except we need the -O flag otherwise ss
does that confusing thing where it splits the output on multiple
lines. But I actually use:
ss -pluntO
... i.e. without the -e as the information it gives (cgroup, fd
number, etc) is not much more useful than what's already provided with
-p (service and UID).
All of the above also show sockets that are not actually a concern
because they only listen on localhost. Those should be filtered
out. So now we embark into that wild filtering ride.
This is going to list all open sockets and show the port number and
service:
ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu
But that doesn't filter out the localhost stuff, and gives lots of false
positives (like emacs, above). And this is where it gets... not fun, as
you need to match "localhost" but we don't resolve names, so you need
to do some fancy pattern matching:
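As a hedged sketch of what such a filter can look like (the `not_localhost` helper name is my own, not the original command, and this only matches the common numeric loopback forms):

```shell
# Exclude loopback listeners without resolving names: match the
# numeric localhost addresses explicitly (a sketch, not a complete
# filter for every loopback form).
not_localhost() {
    grep -Ev '127\.0\.0\.1|\[::1\]'
}
# intended usage (assumes ss from iproute2 is available):
#   ss -pluntO --no-header | not_localhost
```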
Surely there must be a better way. It turns out that lsof can do
some of this, and it's relatively straightforward. This lists all
listening TCP sockets:
... which basically replaces the grep -v localhost line.
In theory, this would do the equivalent on UDP
lsof -iUDP -sUDP:^Idle
... but in reality, it looks like lsof on Linux can't figure out the
state of a UDP socket:
lsof: no UDP state names available: UDP:^Idle
... which, honestly, I'm baffled by. It's strange because ss can
figure out the state of those sockets, heck it's how -l vs -a
works after all. So we need something else to show listening UDP
sockets.
The following actually looks pretty good after all:
ss -pluO
That will list localhost sockets of course, so we can explicitly ask
ss to resolve those and filter them out with something like:
ss -plurO | grep -v localhost
Oh, and look here! ss supports pattern matching, so we can actually
tell it to ignore localhost directly, which removes that horrible
sed line we used earlier:
ss -pluntO '! ( src = localhost )'
That actually gives a pretty readable output. One annoyance is we
can't really modify the columns here, so we still need some god-awful
sed hacking on top of that to get a cleaner output:
ss -nplutO '! ( src = localhost )' \
    | sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Netid\tPort\tProcess/' \
    | sort -nu
That looks horrible and is basically impossible to memorize. But the
output sure looks nice.
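To avoid retyping that monstrosity, the whole pipeline can be wrapped in a small shell function. This is a sketch: the `listening` name is mine, and it assumes GNU sed plus an ss from iproute2 recent enough to support -O:

```shell
# listening: show Netid/Port/Process for listening TCP and UDP
# sockets, skipping anything bound to localhost (a sketch wrapping
# the ss | sed | sort pipeline from the text).
listening() {
    ss -nplutO '! ( src = localhost )' \
        | sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Netid\tPort\tProcess/' \
        | sort -nu
}
```

Run `listening` to get the cleaned-up table on demand.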
A warning: this blog post may be slightly unpleasant, and I do wish there was a standardized way, just like movies have ratings such as General, 14+, 16+, Adult and whatnot, so people could share without getting into trouble. Please consider this blog post as somewhat mature and perhaps disturbing.
Cutting off body parts
For the last couple of months or so, we have been getting daily reports of men or women being killed and then chopped into pieces, and this is being normalized. During my growing-up years, the only such case I remember was the 1995 Tandoor case, and it jolted the conscience of the nation. But it seems a lot of water has passed under the bridge, as no one seems to be shocked anymore. Also shocking are the number of heart attacks that young people are getting. Dunno the reason for either. Just saw this yesterday; the first thing that came to my mind was, at least she wasn't chopped. It was only later I realized that the younger sister may have wanted to educate herself or had some other dreams, but because of some evil customs had to give her hand in marriage. No outrage here for anything, not even child marriage :(. How have we become so insensitive? And it's mostly Hindus killing Hindus, but still no outrage. We have been killing Muslims and Christians, so that I guess is just par for the course :(. I wish I could say there is a solution, but there seems to be none. Even child abuse cases have been going up, but sad to say even they are being normalised. It's only when a US agency or somebody else feels shocked that we feel shocked; otherwise we have become numb.
AMD and Lenovo Lappies
About a couple of months ago I made a blog post about lappies. Then Russel reached out to me on Twitter and we engaged. One thing led to another, and soon, while reading about some other topic, I came across this:
The above is a video presentation given by Mark Pearson. Sad to say, it was not illuminating enough, especially the whole BootHole thing. I did see three blog posts to get some more insight. The security entry did also share some news. I also reached out to Mr. Pearson to know the status and also to enquire if there are any new lappies without an OS that I can buy from Lenovo. Sadly, both these e-mails went unanswered. Maybe they went to spam or something else, I have no clue. While other organizations did work on it, Debian was kinda side-lined, hence the annoyance from the Debian maintainers that the whole thing came from left field. And this doesn't just affect Debian but all those downstream distributions that rely on Debian. Now, while it's almost a year since then and probably all has been fixed, there haven't been any instructions that I could find that tell me whether there is a new way or the old way just works. In any case, I do think the bookworm release would probably have all the fixes needed. IIRC, we entered soft freeze just a couple of weeks back.
I have to admit something though: I have never used Secure Boot as it has been designed, partially because I always run testing, irrespective of whatever device I use. And AFAIK the whole idea of Secure Boot is to have few updates, unlike testing, which is kind of a rolling-release thing. While Secure Boot wants the same underlying bits, in testing it's hard to ensure that, as the idea is to test new releases of software and see what works and what breaks until we send it to a final release (something like bookworm). FWIW, currently bookworm and testing are one and the same till bookworm releases, and then testing would have its own updates from the next hour/day after.
Here's my (forty-first) monthly but brief update about the activities I've done in the F/L/OSS world.
Debian
This was my 50th month of actively contributing to Debian.
I became a DM in late March 2019 and a DD on Christmas '19! \o/
There's a bunch of things I do, both technical and non-technical. Here are the things I did this month:
libyang2 (2.1.30-2) - Adding DEP8 test for yangre.
redmine (5.0.4-3) - Add patch to stop unnecessary recursive chown'ing (Fixes: #1022816, #1022817).
redmine (5.0.4-4) - Set DH_RUBY_IGNORE_TESTS to all (Fixes: #1031308).
python-jira (3.4.1-1) - New upstream version, v3.4.1.
Others
Looked up some Release team documentation.
Sponsored php-font-lib and php-dompdf-svg-lib for William.
Granted DM rights for php-dompdf.
Mentoring for newcomers.
Reviewed micro bits for Nilesh, new uploads and changes.
Ruby sprints.
Bug work (on BTS and #debian-ruby) for rails and redmine.
Moderation of -project mailing list.
A huge thanks to Freexian for sponsoring my Debian work and Entrouvert for sponsoring the Redmine backports. :D
Ubuntu
This was my 25th month of actively contributing to Ubuntu.
Now that I joined Canonical to work on Ubuntu full-time, there's a bunch of things I do! \o/
I mostly worked on different things, I guess.
I was too lazy to maintain a list of things I worked on, so there's
no concrete list atm. Maybe I'll get back to this section later or
will start to list stuff from the fall, as I was doing before. :D
Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).
This was my forty-first month as a Debian LTS and thirty-second month as a Debian ELTS paid contributor.
I worked for 24.25 hours for LTS and 28.50 hours for ELTS.
LTS CVE Fixes and Announcements:
Fixed CVE-2022-47016 for tmux and uploaded to buster via 2.8-3+deb10u1.
But decided to not roll the DLA for the package as the CVE got rejected upstream.
Worked on ruby-rails-html-sanitizer and added notes to the security-tracker.
TL;DR: we need newer methods in ruby-loofah to make the patches for ruby-rails-html-sanitizer backportable.
Started to look at other set of packages meanwhile.
ELTS CVE Fixes and Announcements:
Issued ELA 813-1, fixing CVE-2017-12618 and CVE-2022-25147, for apr-util.
For Debian 8 jessie, these problems have been fixed in version 1.5.4-1+deb8u1.
For Debian 9 stretch, these problems have been fixed in version 1.5.4-3+deb9u1.
Issued ELA 815-1, fixing CVE-2022-44792 and CVE-2022-44793, for net-snmp.
For Debian 8 jessie, these problems have been fixed in version 5.7.2.1+dfsg-1+deb8u6.
For Debian 9 stretch, these problems have been fixed in version 5.7.3+dfsg-1.7+deb9u5.
Helped facilitate RabbitMQ update queries from one of our customers.
Started to look at other set of packages meanwhile.
in the Debian Perl
Group we are maintaining a lot of packages (around 4000 at the time of
writing). this also means that we are spending some time on improving our
tools which allow us to handle this amount of packages in a reasonable time.
many of the tools are shipped in the pkg-perl-tools package
since 2013, & lots of them are scripts which are called as subcommands
of the dpt(1) wrapper script.
in the last years I got the impression that not all team members are aware
of all the useful tools, & that some more promotion might be called for.
& last week I was in the mood for creating a short demo video to
showcase how I use some dpt(1) subcommands when updating a
package to a new upstream release. (even though I prefer text over videos
myself :))
probably not a cinematographic masterpiece but as the feedback of a few
viewers has been positive, I'm posting it here as well:
(direct
link, as planets ignore iframes)
I have mentioned several times in this blog, as well as by other
communication means, that I am very happy with the laptop I bought
(used) about a year and a half ago: an ARM-based Lenovo Yoga
C630.
Yes, I knew from the very beginning that using this laptop would pose
a challenge to me in many ways, as full hardware support for ARM
laptops is nowhere near as easy as for plain boring x86 systems. But the
advantages far outweigh the inconvenience (i.e. the hoops I had to
jump through to handle video-out when I started teaching
presentially,
which are fortunately a thing of the past now).
Anyway, this post is not about my laptop.
Back in 2018, I was honored to be appointed as a member of the Debian
Technical
Committee. Of
course, that meant (due to the very clear and clever point 6.2.7.1 of
the Debian
Constitution) that
my tenure in the Committee (as well as Niko Tyni's) finished on
January 1, 2023. We were invited to take part in a Jitsi call as a
last meeting, as well as to welcome Matthew Garrett to the Committee.
Of course, I arranged so I would be calling from my desktop system at
work (for which I have an old, terrible webcam, but as long as I
don't need to control screen sharing too finely, it mostly works). Out of
eight people in the call, two had complete or quite crippling failures
with their multimedia setup, and one had a frozen image (at least as
far as I could tell).
So, yes, Debian is indeed good and easy and simple and reliable for
most nontechnical users using standard tools.
But I guess that we power users enjoy tweaking our setup to our
precise particular liking. Or that we just don't care about
frivolities such as having a working multimedia setup.
Or... I don't know what happens.
But the fact that close to half of the Technical Committee, which
should consist of Debian Developers who know their way around
technical obstacles, cannot get a working multimedia setup for a
simple, easy WebRTC call (even after a pandemic that made us all work
via teleconferencing solutions on a daily basis!) is just... beautiful.
It's been a year since I started exploring HLedger, and I'm still
going. The rollover to 2023 was an opportunity to revisit my approach.
Some time ago I stumbled across Dmitry Astapov's HLedger notes (fully-fledged
hledger, which I briefly
mentioned in eventual consistency) and decided to adopt some of its ideas.
new year, new journal
First up, Astapov encourages starting a new journal file for a new calendar
year. I do this for other, accounting-adjacent files as a matter of course,
and I did it for my GNUCash files prior to adopting HLedger. But the reason
for those is a general suspicion that a simple mistake with those softwares
could irrevocably corrupt my data. I'm much more confident with HLedger, so
rolling over at years end isn't necessary for that. But there are other
advantages. A quick obvious one is you can get rid of old accounts (such as
expense accounts tied to a particular project, now completed).
one journal per import
In the first year, I periodically imported account data via CSV exports
of transactions and HLedger's (excellent) CSV import system. I imported
all the transactions, once each, into a single, large journal file.
Astapov instead advocates for creating a separate journal
for each CSV that you wish to import, and keeping the CSV around, leaving you
with a 1:1 mapping of CSV:journal. Then use HLedger's "include" mechanism to
pull them all into the main journal.
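A throwaway sketch of generating such a top-level journal (the file names and layout here are hypothetical, not Astapov's):

```shell
# Sketch: build a top-level journal that pulls every per-CSV journal
# in via hledger's include directive (file names are hypothetical).
{
    echo "include 2023-opening.journal"
    for j in import/*.journal; do
        # the glob stays literal when nothing matches, so check first
        if [ -e "$j" ]; then echo "include $j"; fi
    done
} > 2023.journal
```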
With the former approach, where the CSV data was imported precisely once, it
was only exposed to your import rules once. The workflow ended up being:
import transactions; notice some that you could have matched with import rules
and auto-coded; write the rule for the next time. With Astapov's approach, you
can re-generate the journal from the CSV at any point in the future with an
updated set of import rules.
tracking dependencies
Now we get onto the job of driving the generation of all these derivative
journal files. Astapov has built a sophisticated system using Haskell's "Shake",
with which I'm not yet familiar, but for my sins I'm quite adept at (GNU-flavoured)
UNIX Make, so I started building with that. An example rule
captures the dependency between the journal and the underlying CSV,
but also on the relevant rules file; if I modify that, and this target
is run in the future, all dependent journals should be re-generated.1
opening balances
It's all fine and well starting over in a new year, and I might be generous
to forgive debts, but I can't count on others to do the same. We need
to carry over some balance information from one year to the next. Astapov has
a more complex (or perhaps featureful) scheme for this involving a custom
Haskell program, but I bodged something with a pair of make targets.
I think this could be golfed into a year-generic rule with a little more work.
The nice thing about this approach is the opening balances for a given year
might change, if adjustments are made in prior years. They shouldn't, for
real accounts, but very well could for more "virtual" liabilities (including
deciding to write off debts).
run lots of reports
Astapov advocates for running lots of reports, and automatically. There's a
really obvious advantage of that to me: there's no chance anyone except me
will actually interact with HLedger itself. For family finances, I need
reports to be able to discuss anything with my wife.
Extending my make rules to run reports is trivial. I've gone for HTML
reports for the most part, as they're the easiest on the eye. Unfortunately
the most useful report to discuss (at least at the moment) would be a list
of transactions in a given expense category, and the register/aregister
commands did not support HTML as an output format. I submitted my first
HLedger patch to add HTML output support to aregister:
https://github.com/simonmichael/hledger/pull/2000
addressing the virtual posting problem
I wrote in my original hledger blog post that I had to resort to
unbalanced virtual postings in order to record both a liability between
my personal cash and family, as well as categorise the spend. I still
haven't found a nice way around that.
But I suspect having broken out the journal into lots of other journals
paves the way to a better solution to the above.
The form of a solution I am thinking of is: some scheme whereby the two
destination accounts are combined together; perhaps, choose one as a primary
and encode the other information in sub-accounts under that. For example,
repeating the example from my hledger blog post:
(I note this is very similar to a solution proposed to me by someone
responding on twitter).
The next step is to recognise that sometimes when looking at the data I
care about one aspect, and at other times the other, but rarely both. So
for the case where I'm thinking about family finances, I could use
account aliases
to effectively flatten out the expense category portion and ignore it.
On the other hand, when I'm concerned about how I've spent my personal
cash and not about how much I owe the family account, I could use
aliases to do the opposite: rewrite away the family:liabilities:jon
prefix and combine the transactions with the regular jon:expenses
account hierarchy.
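As a sketch, the two views might be achieved with alias directives along these lines; the sub-account layout is hypothetical:

```
; Hypothetical sketch, assuming spend is recorded in sub-accounts
; like family:liabilities:jon:food.

; Family view: flatten away the expense detail, leaving the liability.
alias /^family:liabilities:jon:.*/ = family:liabilities:jon

; Personal view: rewrite the liability prefix into the regular
; jon:expenses hierarchy.
alias /^family:liabilities:jon:/ = jon:expenses:
```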
(this is all speculative: I need to actually try this.)
catching errors after an import
When I import the transactions for a given real bank account, I check the
final balance against another source: usually a bank statement, to make
sure they agree. I wasn't using any of the myriad methods to make sure
that this remains true later on, so there was a risk that I would make an
edit to something, accidentally remove a transaction that contributed
to that number, and not notice (until the next import).
The CSV data my bank gives me for accounts (not for credit cards) also includes
a 'resulting balance' field. It was therefore trivial to extend the CSV import
rules to add balance
assertions to
the transactions that are generated. This catches the problem.
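For illustration, a generated transaction with such an assertion might look like the following (account names and amounts are invented); the amount after the = is checked against the running balance hledger computes:

```
; Hypothetical generated transaction: "= 1234.56 GBP" asserts the
; account's balance after this posting, taken from the bank's
; 'resulting balance' CSV field.
2023-01-05 SUPERMARKET
    assets:bank:current       -42.00 GBP = 1234.56 GBP
    expenses:food:groceries    42.00 GBP
```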
There are a couple of warts with balance assertions on every such
transaction: for example, dealing with the duplicate transaction for paying
a credit card: one from the bank statement, one from the credit card.
Removing one of the two is sufficient to correct the account balances but
sometimes they don't agree on the transaction date, or the transactions
within a given day are sorted slightly differently by HLedger than by the
bank. The simple solution is to just manually delete one or two assertions:
there remain plenty more for assurance.
going forward
I've only scratched the surface of the suggestions in Astapov's "full fledged
HLedger" notes. I'm up to step 2 of 14. I'm expecting to return to it once
the changes I've made have bedded in a little bit.
I suppose I could anonymize and share the framework (Makefile etc) that I am
using if anyone was interested. It would take some work, though, so I don't know
when I'd get around to it.
The "rm latest" bit is to clear up some state-tracking files that HLedger writes to avoid importing duplicate transactions. In this case, I know better than HLedger.
This post is very late, but better late than never! I want to take a look back at the work that was done on FreedomBox during 2022.
Several apps were added to FreedomBox in 2022. The email server app (that was developed by a Google Summer of Code student back in 2021) was finally made available to the general audience of FreedomBox users. You will find it under the name "Postfix/Dovecot", which are the main services configured by this app.
Another app that was added is Janus, which has the description "video room". It is called "video room" instead of "video conference" because the room itself is persistent. People can join the room or leave, but there isn't a concept of "calling" or "ending the call". Actually, Janus is a lightweight WebRTC server that can be used as a backend for many different types of applications. But as implemented currently, there is just the simple video room app. In the future, more advanced apps such as Jangouts may be packaged in Debian and made available to FreedomBox.
RSS-Bridge is an app that generates RSS feeds for websites that don't provide their own (for example, YouTube). It can be used together with any RSS news feed reader application, such as TT-RSS, which is also available in FreedomBox.
There is now a Privacy page in the System menu, which allows enabling or disabling the Debian popularity-contest tool. If enabled, it reports the Debian packages that are installed on the system. The results can be seen at https://popcon.debian.org, which currently shows over 400 FreedomBoxes are reporting data.
A major feature added to FreedomBox in 2022 is the ability to uninstall apps. This feature is still considered experimental (it won't work for every app), but many issues have been fixed already. There is an option to take a backup of the app's data before uninstalling. There is also now an operations queue in case the user starts multiple install or uninstall operations concurrently.
XEP-0363 (HTTP File Upload) has been enabled for Ejabberd Chat Server. This allows files to be transferred between XMPP clients that support this feature.
There were a number of security improvements to FreedomBox, such as adding fail2ban jails for Dovecot, Matrix Synapse, and WordPress. Firewall rules were added to ensure that authentication and authorization for services proxied through the Apache web server cannot be bypassed by programs running locally on the system. Also, we are no longer using libpam-tmpdir to provide temporary folder isolation, because it causes issues for several packages. Instead we use systemd's sandboxing features, which provide even better isolation for services.
Some things were removed in 2022. The ez-ipupdate package is no longer used for Dynamic DNS, since it was replaced by a Python implementation of GnuDIP. An option to restrict who can log in to the system was removed, due to various issues that arose from it. Instead there is an option to restrict who can log in through SSH. The DNSSEC diagnostic test was removed, because it caused confusion for many users (although use of DNSSEC is still recommended).
Finally, some statistics. There were 31 releases in 2022 (including point releases). There were 68 unique contributors to the git repository; this includes code contributions and translations (but not contributions to the manual pages). In total, there were 980 commits to the git repository.
At CentOS Connect yesterday, Jack Aboutboul and Javier Hernandez
presented a talk about AlmaLinux and SBOMs
[video],
where they are exploring a novel supply-chain security effort in the
RHEL ecosystem.
Now, I have unfortunately ignored the Red Hat ecosystem for a long
time, so if you are in a similar position to me: CentOS used to
produce debranded rebuilds of RHEL; but Red Hat changed the project
around so that CentOS Stream now sits in between Fedora Rawhide and
RHEL releases, allowing the wider community to try out/contribute to
RHEL builds before their release. This is credited with making early
RHEL point releases more stable, but left a gap in the market for
debranded rebuilds of RHEL; AlmaLinux and Rocky Linux are two
distributions that aim to fill that gap.
Alma are generating and publishing Software Bill of Material (SBOM)
files for every package; these are becoming a requirement for all
software sold to the US federal government. What's more, they are
sending these SBOMs to a third party (CodeNotary) who store them in
some sort of Merkle tree system
to make it difficult for people to tamper with them later. This should
theoretically allow end users of the distribution to verify the supply
chain of the packages they have installed?
I am currently unclear on the differences between CodeNotary/ImmuDB
vs. Sigstore/Rekor, but there's an SBOM devroom at FOSDEM tomorrow so
maybe I'll soon be learning that. This also makes me wonder if a
Sigstore-based approach would be more likely to be adopted by
Fedora/CentOS/RHEL, and whether someone should start a CentOS Software
Supply Chain Security SIG to figure this out, or whether such an
effort would need to live with the build system team to be properly
integrated. It would be nice to understand the supply-chain story for
CentOS and RHEL.
As I write this, I'm also reflecting that perhaps it would be helpful
to explain what happens next in the SBOM consumption process; i.e. can
this effort demonstrate tangible end user value, like enabling
AlmaLinux to integrate with a vendor-neutral approach to vulnerability
management? Aside from the value of being able to sell it to the US
government!
Another busy week!
In the snap world, I have been busy trying to solve the problem of core20 snaps needing security updates now that focal is no longer supported in KDE Neon. So I have created a PPA at https://launchpad.net/~scarlettmoore/+archive/ubuntu/kf5-5.99-focal-updates/+packages
Which of course presents more work, as kf5 5.99.0 requires qt5 5.15.7. Sooo this is a WIP.
Snapcraft kde-neon-extension is moving along as I learn the python ways of formatting, and fixing some issues in my tests.
In the Debian world, I am sad to report that Mycroft-AI has gone bust; however, the packaging efforts are not in vain, as the project has been forked to https://github.com/orgs/OpenVoiceOS/repositories and should be relatively easy to migrate.
I have spent some time verifying that the libappimage in buster is NOT vulnerable to CVE-2020-25265, as the code wasn't introduced yet.
Skanpage and plasma-bigscreen both have source uploads, so they can migrate to testing and hopefully make it into bookworm!
As many of you know, I am seeking employment. I am a hard worker who thrives on learning new things. I am a self-starter, a knowledge sponge, and eager to be an asset to < insert your company here >! Meanwhile, as interview processes are much longer than I remember and the industry is exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my GoFundMe. Thank you for your consideration. I still have a ways to go to cover my bills this month. I will continue with my work until I cannot. I hate asking, but please consider a donation. Thank you!
GoFundMe
It has been a very busy few weeks as we endured snowstorm after snowstorm!
I have made some progress on the Mycroft in Debian adventure! This will slow down as we enter the freeze for bookworm, and there is no way we will make it into bookworm, as there are some significant issues to solve.
lingua-franco uploaded and accepted
pako uploaded and accepted
speechpy-fast uploaded
fitipy ready to upload
On the KDE side of things:
Plasma-bigscreen uploaded and accepted
skanpage uploaded and in NEW
In the Snap arena, I have made my first significant contribution to snapcraft upstream! It has been a great learning experience as I convert my Ruby knowledge to Python. Formatting is something I need to get used to!
https://github.com/snapcore/snapcraft/pull/4023
Snaps have been on hold due to the kde-neon extension not having core22 support, and the above pull request fixes that. Meanwhile, I have been working on getting core20 apps (22.08.3 is the final KDE apps version for this base) rebuilt for security updates.
As many of you know, I am seeking employment. I am a hard worker who thrives on learning new things. I am a self-starter, a knowledge sponge, and eager to be an asset to < insert your company here >!
Meanwhile, as interview processes are much longer than I remember and the industry is exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my GoFundMe. Thank you for your consideration.
GoFundMe
The local government here has all the schools use an iCalendar feed for things like when school terms start and stop and when other school events occur. The department's website also has events like public holidays. The issue is that none of them are made all-day events; instead each is an event that happens at midnight, or one minute past midnight.
The events synchronise fine, though Google's calendar is known for synchronising when it feels like it, not at any particular time you would like it to.
Even though a public holiday lasts all day, such holidays are sent as appointments at midnight.
That means on my phone all the events are these tiny bars that appear right up the top of the screen and are easily missed, especially when the focus of the calendar is during the day.
On the phone, you can see the tiny purple bar at midnight. This is how the events appear. It's not the calendar's fault; as far as it knows, the school events are happening at midnight.
You can also see Lunar New Year and Australia Day appear in the all-day part of the calendar and don't scroll away. That's where these events should be.
Why are all the events appearing at midnight? The reason is that the feed is incorrectly set up and includes the time. The events are sent in iCalendar format, and a typical event looks like this:
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20230206T000000
DTEND;TZID=Australia/Sydney:20230206T000000
SUMMARY:School Term starts
END:VEVENT
The event's starting and stopping date and time are the DTSTART and DTEND lines. Both of them have a date of 2023/02/06, or 6th February 2023, and a time of 00:00:00, or midnight. So the calendar is doing the right thing; we need to fix the feed!
The Fix
I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.
It's pretty quick and nasty but gets the job done. So what is it doing?
Lines 2-10: Check the given variable s and match it to either site1 or site2 to obtain the URL. If you only had one site to fix, you could just set the REMOTE_URL variable.
Lines 12-15: A typical fopen() and nasty error handling.
Line 16: Set the content type to a calendar.
Line 17: A while loop to read the contents of the remote site line by line.
Lines 18-21: This is where the magic happens; preg_replace is a Perl-style regular expression replacement. The PCRE is:
Find lines starting with DTSTART or DTEND and store the keyword in capture 1.
Skip everything that isn't a colon. This is the timezone information; I wasn't sure if it was needed or how to combine it, so I took it out. None of the all-day events I saw have a time zone.
Find 8 numerics (this is the YYYYMMDD date) and store them in capture 2.
Scan the time part: a literal T then HHMMSS. Some sites use midnight, some use one minute past, so it covers both.
Replace the line with either DTSTART or DTEND (capture 1), set the value type to DATE (as the default is date/time), and print the date (capture 2).
Line 22: Print either the modified or original line.
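The same transformation can be sketched in Python rather than PHP; the pattern below is a reconstruction of the PCRE described above, not the script's exact one:

```python
import re

# Reconstruction of the described replacement: capture the keyword
# (DTSTART/DTEND) in group 1, skip the non-colon timezone part, capture
# the YYYYMMDD date in group 2, and match a time of midnight (000000)
# or one minute past midnight (000100).
PATTERN = re.compile(r'^(DTSTART|DTEND)[^:]*:(\d{8})T00(?:00|01)00')

def make_all_day(line: str) -> str:
    """Rewrite a midnight DTSTART/DTEND line as an all-day DATE value."""
    return PATTERN.sub(r'\1;VALUE=DATE:\2', line)

feed = """BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20230206T000000
DTEND;TZID=Australia/Sydney:20230206T000000
SUMMARY:School Term starts
END:VEVENT"""

for line in feed.splitlines():
    print(make_all_day(line))
```

Lines that don't match (SUMMARY, BEGIN, END, and so on) pass through unchanged, just as in the PHP version.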
You need to save the script on your web server somewhere, possibly with an alias command.
The whole point of this is to change the type from a date/time to a date-only event and only print the date part of it for the start and end of it. The resulting iCalendar event looks like this:
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230206
DTEND;VALUE=DATE:20230206
SUMMARY:School Term starts
END:VEVENT
The calendar then shows it properly as an all-day event. I would check that the script works before doing the next step. You can use tools like curl or wget to download it. If you use a normal browser, it will probably just download the translated file.
If you're not seeing the right thing, then it's probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and a sample match, so you've got something to start with.
Calendar settings
The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar, they provide instructions; you want to follow the section titled "Use a link to add a public calendar".
The URL here is not the actual site's URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the ?s=site1 part. So, if you aliased your script to /myical.php, the site ID was site1, and your website is www.example.com, the URL would be https://www.example.com/myical.php?s=site1.
You should then see the events appear as all-day events on your calendar.
First up is Minidebconf Tamilnadu 2023, which will be held on 28-29 January 2023. You can find the rest of the details here. I do hope we get to see/hear some good stuff from the Minidebconf. Best of luck to all those who are applying.
Tinnitus
During the lock-down of March 2020, I became aware of noise in my ears and subsequently major hearing loss. It took me quite a while to learn that Tinnitus happens both to those who have hearing loss and those who do not. I keep running into threads like this, and as shared by someone, nobody knows what really causes it. I did try some of the apps (an app called Resound on Android) that are supposed to tackle Tinnitus, but it hasn't helped much. There is this, but at least for me, right now it is pretty speculative. Also this, and again highly speculative.
Cooking
After Mum passed away, I haven't cooked anything. This used to give me pleasure, but now it just doesn't feel right. Cooking is something you enjoy when you are doing it for somebody else and not just for yourself; at least that's how I feel, and with that goes the curiosity to know more recipes. I do wanna buy a wok sometime, but when and how I just don't know.
Books
Have been reading books quite a bit, and due to that had to again revisit and understand ISBN. Perhaps I might have shared it before. The history of ISBN really is something. And that correlates with the book I read, Raising Steam by Terry Pratchett. Raising Steam is the 40th book in the Discworld series, and it basically romanticizes and reminisces about how the idea of an engine was born, then a steam engine, and how Railways actually started. A lot of history and experiences from the early years of steam railways have been taken and transplanted into the book. Also how Railways are and can be successful, if only they are invested in wisely and maintenance is done. This is where imagination and reality come apart, as maintenance isn't done and then you have issues. While this is and was in the UK, a similar situation exists in India and many other places around the world, and it doesn't matter whether it is private or public. Exceptions are Germany and France, but that may be due to labour movements that happened and were successful, unlike in other places. I could go on, but then it would become a different article in itself. Suffice to say there is much to learn, and you need serious people to look after it. In both the UK and India we lack that. And not just in Railways but Civil Aviation too, but again, that is a story in itself.
Web-series
Apart from books, I have been watching web-series. Willow is a good one that I enjoyed, even though I hadn't seen the earlier movie. While there has been a flurry of movies and web-series, both at the end of the year and the beginning of 2023, I have tried to be a bit selective about what I wanna watch or not. If it has crime, fantasy, or drama, then I usually like it. For e.g., I saw Blackout and was pretty much engrossed in what would happen next. It also leads you to ask questions about centralization vs. de-centralization of both power and other utilities, and does make a case for communities to have their own utilities apart from the grid as a fallback. How we do that over decades or centuries is perhaps a different question altogether. There were two books that kinda stood out for me. The first was Ian Rankin's "Naming of the Dead". The book is about a cynical John Rebus, a man after my own heart. I am probably going to buy a few more of his series. In a way it also tells you why the UK is the way it is right now. Another book that I liked was "Shades of Grey" by Jasper Fforde. This is one of the books that Mum would have clearly liked. It is pretty unusual while at the same time very close to 1984 and other such dystopian novels. The main trope of the book is what color you can see and how much you can see. The main character is somebody who can see Red, around the age of 20. One of the interesting aspects of the book is "de-facting", which closely resembles the Post-Truth world where alternative facts can be made out of thin air and don't need any scientific evidence to back them up. In Jasper's world, they don't care about how things work; most technology is banned, curiosity is considered harmful, and those who show it are murdered one way or the other. Interestingly, the author just last year decided to start book 2 in what is supposed to be a 3-book series. This also tells why the U.S. is in such a precarious situation, in a way.
A part of it is also due to the media which is in hands of chosen few, the same goes for UK and India, almost an oligopoly.
The Great Escape
This is also a book, but also about the experiences of people, not in the 19th-20th century but today, that tells you slavery and human-trafficking are alive and well. This piece from NPR tells you about an MNC and Indian workers. What I found interesting is that there is barely a mention of the Indian Embassy, which is supposed to help Indian people. I do know for a fact that the embassies of India have seen a drastic shortage of both people and materials ever since the new Govt. came into place nine years ago. Incidentally, the BBC shared a documentary about the Gujarat riots of 2002, and that has been censored in India. They keep quiet about the UK Govt., which did find that the Chief Minister was directly responsible for the killings; in fact his number 2, Amit Shah, had said that "we would do 2002 again" in the election cycle barely a month ago. But sadly, no hate speech FIR or any other action was taken against Mr. Shah. There have been attempts by people to showcase the documentary. For e.g., JNU tried it, and the rowdies from ABVP (an arm of the BJP) created violence. Even the questions that have been asked by the Wire, the GOI will not acknowledge.
Interestingly, all of India's edtechs have taken a beating in the last 6-8 months, including the biggest, BYJU's. Sharing a story from 2021 when things were at their best; today all of them are at the bottom. In fact, the public has been wary, as the prices of the courses have kept on increasing and most case studies have been found to be fake. Also, the general outlook on jobs and growth has been pessimistic. In fact, most companies have been shedding jobs by the truckload, mostly in the I.T. sector but in other sectors as well. Hospitality and other related sectors have taken a huge beating, part of it post-pandemic, part of it the Govt.'s refusal to either spend money or make any positive policies for infrastructure, education, medical care, you name it; they think the private sector has all the answers, which has been proven wrong again and again. I did not want to end on a discordant note, but things are the way they are.
This is another Amazon collection of short fiction, this time mostly at
novelette length. (The longer ones might creep into novella.) As before,
each one is available separately for purchase or Amazon Prime "borrowing,"
with separate ISBNs. The sidebar cover is for the first in the sequence.
(At some point I need to update my page templates so that I can add
multiple covers.)
N.K. Jemisin's "Emergency Skin" won the 2020 Hugo Award for Best
Novelette, so I wanted to read and review it, but it would be too short
for a standalone review. I therefore decided to read the whole collection
and review it as an anthology.
This was a mistake. Learn from my mistake.
The overall theme of the collection is technological advance, rapid
change, and the ethical and social question of whether we should slow
technology because of social risk. Some of the stories stick to that
theme more closely than others. Jemisin's story mostly ignores it, which
was probably the right decision.
"Ark" by Veronica Roth: A planet-killing asteroid has been on
its inexorable way towards Earth for decades. Most of the planet has been
evacuated. A small group has stayed behind, cataloging samples and
filling two remaining ships with as much biodiversity as they can find
with the intent to leave at the last minute. Against that backdrop, two
of that team bond over orchids.
If you were going "wait, what?" about the successful evacuation of Earth,
yeah, me too. No hint is offered as to how this was accomplished. Also,
the entirety of humanity abandoned mutual hostility and national borders
to cooperate in the face of the incoming disaster, which is, uh, bizarrely
optimistic for an otherwise gloomy story.
I should be careful about how negative I am about this story because I am
sure it will be someone's favorite. I can even write part of the positive
review: an elegiac look at loss, choices, and the meaning of a life, a
moving look at how people cope with despair. The writing is fine, the
story structure works; it's not a bad story. I just found it monumentally
depressing, and was not engrossed by the emotionally abused protagonist's
unresolved father issues. I can imagine a story around the same facts and
plot that I would have liked much better, but all of these people need
therapy and better coping mechanisms.
I'm also not sure what this had to do with the theme, given that the
incoming asteroid is random chance and has nothing to do with
technological development. (4)
"Summer Frost" by Blake Crouch: The best part of this story is
the introductory sequence before the reader knows what's going on, which
is full of evocative descriptions. I'm about to spoil what is going on,
so if you want to enjoy that untainted by the stupidity of the rest of the
plot, skip the rest of this story review.
We're going to have a glut of stories about the weird and obsessive form
of AI risk invented by the fevered imaginations of the "rationalist"
community, aren't we. I don't know why I didn't predict that. It's going
to be just as annoying as the glut of cyberpunk novels written by people
who don't understand computers.
Crouch lost me as soon as the setup is revealed. Even if I believed that
a game company would use a deep learning AI still in learning mode
to run an NPC (I don't; see
Microsoft's Tay for an obvious reason why not), or that such an NPC
would spontaneously start testing the boundaries of the game world (this
is not how deep learning works), Crouch asks the reader to believe that
this AI started as a fully scripted NPC in the prologue with a
fixed storyline. In other words, the foundation of the story is that this
game company used an AI model capable of becoming a general intelligence
for barely more than a cut scene.
This is not how anything works.
The rest of the story is yet another variation on a science fiction plot
so old and threadbare that Isaac Asimov invented the Three Laws of
Robotics to avoid telling more versions of it. Crouch's contribution is
to dress it up in the terminology of the excessively online. (The middle
of the story features a detailed discussion of
Roko's basilisk;
if you recognize that, you know what you're in for.) Asimov would not
have had a lesbian protagonist, so points for progress I guess, but the AI
becomes more interesting to the protagonist than her wife and kid because
of course it does. There are a few twists and turns along the way, but
the destination is the bog-standard hard-takeoff general intelligence
scenario.
One more pet peeve: Authors, stop trying to illustrate the growth of your
AI by having it move from broken to fluent English. English grammar is so
much easier than self-awareness or the Turing test that we had programs
that could critique your grammar decades before we had believable
chatbots. It's going to get grammar right long before the content of the
words makes any sense. Also, your AI doesn't sound dumber, your AI sounds
like someone whose native language doesn't use pronouns and helper verbs
the way that English does, and your decision to use that as a marker for
intelligence is, uh, maybe something you should think about. (3)
"Emergency Skin" by N.K. Jemisin: The protagonist is a
heavily-augmented cyborg from a colony of Earth's diaspora. The founders
of that colony fled Earth when it became obvious to them that the planet
was dying. They have survived in another star system, but they need a
specific piece of technology from the dead remnants of Earth. The
protagonist has been sent to retrieve it.
The twist is that this story is told in the second-person perspective by
the protagonist's ride-along AI, created from a consensus model of the
rulers of the colony. We never see directly what the protagonist is doing
or thinking, only the AI reaction to it. This is exactly the sort of
gimmick that works much better in short fiction than at novel length.
Jemisin uses it to create tension between the reader and the narrator, and
I thoroughly enjoyed the effect. (As shown in the
Broken Earth trilogy, Jemisin is one of the few
writers who can use second-person effectively.)
I won't spoil the revelation, but it's barbed and biting and vicious and I
loved it. Jemisin does deliver the point with a sledgehammer, so be aware
of that if you want subtlety in your short fiction, but I prefer the
bluntness. (This is part of why I usually don't get along with literary
short stories.) The reader of course can't change the direction of the
story, but the second-person perspective still provides a hit of vicarious
satisfaction. I can see why this won the Hugo; it's worth seeking out.
(8)
"You Have Arrived at Your Destination" by Amor Towles: Sam and
his wife are having a child, and they've decided to provide him with an
early boost in life. Vitek is a fertility lab, but more than that, it can
do some gene tweaking and adjustment to push a child more towards one
personality or another. Sam and his wife have spent hours filling out
profiles, and his wife spent hours weeding possible choices down to three.
Now, Sam has come to Vitek to pick from the remaining options.
Speaking of literary short stories, Towles is the non-SFF writer of this
bunch, and it's immediately obvious. The story requires the SFnal
premise, but after that this is a character piece. Vitek is an elite,
expensive company with a condescending and overly-reductive attitude
towards humanity, which is entirely intentional on the author's part.
This is the sort of story that gets resolved in an unexpected conversation
in a roadside bar, and where most of the conflict happens inside the
protagonist's head.
I was initially going to complain that Towles does the standard literary
thing of leaving off the denouement on the grounds that the reader can
figure it out, but when I did a bit of re-reading for this review, I found
more of the bones than I had noticed the first time. There's enough
subtlety that I had to think for a bit and re-read a passage, but not too
much. It's also the most thoughtful treatment of the theme of the
collection, the only one that I thought truly wrestled with the weird
interactions between technological capability and human foresight. Next
to "Emergency Skin," this was the best story of the collection. (7)
"The Last Conversation" by Paul Tremblay: A man wakes up in a
dark room, in considerable pain, not remembering anything about his life.
His only contact with the world at first is a voice: a woman who is
helping him recover his strength and his memory. The numbers that head
the chapters have significant gaps, representing days left out of the
story, as he pieces together what has happened alongside the reader.
Tremblay is the horror writer of the collection, so predictably this is
the story whose craft I can admire without really liking it. In this
case, the horror comes mostly from the pacing of revelation, created by
the choice of point of view. (This would be a much different story from
the perspective of the woman.) It's well-done, but it has the tendency
I've noticed in other horror stories of being a tightly closed system. I
see where the connection to the theme is, but it's entirely in the
setting, not in the shape of the story.
Not my thing, but I can see why it might be someone else's. (5)
"Randomize" by Andy Weir: Gah, this was so bad.
First, and somewhat expectedly, it's a clunky throwback to a 1950s-style
hard SF puzzle story. The writing is atrocious: wooden, awkward, cliched,
and full of gratuitous infodumping. The characterization is almost
entirely through broad stereotypes; the lone exception is the female
character, who at least adds an interesting twist despite being forced to
act like an idiot because of the plot. It's a very old-school type of
single-twist story, but the ending is completely implausible and falls
apart if you breathe on it too hard.
Weir is something of a throwback to an earlier era of scientific puzzle
stories, though, so maybe one is inclined to give him a break on the
writing quality. (I am not; one of the ways in which science fiction has
improved is that you can get good scientific puzzles and good
writing these days.) But the science is also so bad that I was literally
facepalming while reading it.
The premise of this story is that quantum computers are commercially
available. That will cause a serious problem for Las Vegas casinos,
because the generator for keno
numbers is vulnerable to quantum algorithms. The solution proposed by the
IT person for the casino? A quantum random number generator. (The words
"fight quantum with quantum" appear literally in the text if you're
wondering how bad the writing is.)
You could convince me that an ancient keno system is using a pseudorandom
number generator that might be vulnerable to some quantum algorithm and
doesn't get reseeded often enough. Fine. And yes, quantum computers can
be used to generate high-quality sources of random numbers. But this
solution to the problem makes no sense whatsoever. It's like swatting a
house fly with a nuclear weapon.
Weir says explicitly in the story that all the keno system needs is an
external source of high-quality random numbers. The next step is to go to
Amazon and buy a hardware random number generator. If you want to
splurge, it might cost you $250. Problem solved. Yes, hardware random
number generators have various limitations that may cause you problems if
you need millions of bits or you need them very quickly, but not for
something as dead-simple and with such low entropy requirements as keno
numbers! You need a trivial number of bits for each round; even the
slowest and most conservative hardware random number generator would be
fine. Hell, measure the noise levels on the casino floor.
Point a camera at a lava
lamp. Or just buy one of the physical ball machines they use for the
lottery. This problem is heavily researched, by casinos in
particular, and is not significantly changed by the availability of
quantum computers, at least for applications such as keno where the
generator can be reseeded before each generation.
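To put the entropy argument in concrete numbers, here is a sketch assuming standard keno rules (the house draws 20 distinct numbers from 1 to 80) and using the C++ standard library's OS entropy source (`std::random_device`) as a stand-in for a dedicated hardware generator:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <random>
#include <vector>

// Standard keno: the house draws 20 distinct numbers from 1..80.
constexpr int kPool = 80;
constexpr int kDrawn = 20;

// Entropy needed per round: log2(C(80, 20)), computed via lgamma.
// This works out to roughly 62 bits -- less than eight bytes, which
// even the slowest hardware RNG produces effortlessly.
double bits_per_round() {
    double ln_c = std::lgamma(kPool + 1.0) - std::lgamma(kDrawn + 1.0)
                - std::lgamma(kPool - kDrawn + 1.0);
    return ln_c / std::log(2.0);
}

// One keno draw: shuffle 1..80 with a generator seeded from the OS
// entropy source, keep the first 20, and sort them for display.
std::vector<int> draw_keno() {
    std::vector<int> numbers(kPool);
    std::iota(numbers.begin(), numbers.end(), 1);
    std::random_device rd;      // OS/hardware entropy
    std::mt19937_64 gen(rd());  // reseeded from real entropy each round
    std::shuffle(numbers.begin(), numbers.end(), gen);
    numbers.resize(kDrawn);
    std::sort(numbers.begin(), numbers.end());
    return numbers;
}
```

Names like `draw_keno` are mine for illustration; the point is only the scale: a few dozen bits of true entropy per round, with reseeding before each draw, is all the problem requires.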
You could maybe argue that this is an excuse for the IT guy to get his
hands on a quantum computer, which fits the stereotypes, but that still
breaks the story for reasons that would be spoilers. As soon as any other
casino thought about this, they'd laugh in the face of the characters.
I don't want to make too much of this, since anyone can write one bad
story, but this story was dire at every level. I still owe Weir a proper
chance at novel length, but I can't say this added to my enthusiasm. (2)
Rating: 4 out of 10
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language and is
widely used by (currently) 1034 other packages on CRAN, downloaded 27.6 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 509 times according
to Google Scholar.
This release brings another upstream bugfix iteration, 11.4.3,
released in accordance with the aimed-for monthly release cadence. We
had hoped to move away from suppressing deprecation warnings in this
release, and had prepared over two dozen patch sets as well as pull
requests as documented in issue
#391. However, it turns out that we had both missed one or two
needed sets of changes, and that two other sets of changes also
trigger deprecation warnings. So we expanded issue
#391, and added issue
#402 and prepared another eleven pull requests and patches today.
With that we can hopefully remove the suppression of these warnings by
late April.
The full set of changes (since the last CRAN release 0.11.4.2.1)
follows.
Changes
in RcppArmadillo version 0.11.4.3.1 (2023-01-14)
The #define ARMA_IGNORE_DEPRECATED_MARKER remains
active to suppress the (upstream) deprecation warnings, see #391 and
#402
for details.
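For readers using Armadillo directly rather than through RcppArmadillo, the suppression is just a preprocessor define set before the headers are pulled in; a minimal sketch of the mechanism (RcppArmadillo sets this in its own generated configuration, so the exact placement there may differ):

```cpp
// Suppress Armadillo's deprecation markers; must appear before the
// Armadillo headers are included. Shown only to illustrate the
// mechanism referenced in issues #391 and #402.
#define ARMA_IGNORE_DEPRECATED_MARKER
#include <armadillo>
```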
Changes
in RcppArmadillo version 0.11.4.3.0 (2022-12-28) (GitHub Only)
Upgraded to Armadillo release 11.4.3 (Ship of Theseus)
fix corner case in pinv() when processing symmetric
matrices
Protect the undefine of NDEBUG behind additional
opt-in define